    Critical data detection for dynamically adjustable product quality in IIoT-enabled manufacturing

    IIoT technologies, through the widespread use of sensors, generate massive data that are key to innovative and efficient industrial management, operation, and product quality control. The significance of these data has prompted research communities and application developers to explore how to harness their value in secure manufacturing. Critical data analysis, the identification of critical factors for improving the manufacturing process, and critical data associated with product quality have all been investigated in the current literature. However, existing work on product quality control is mainly based on static data analysis: data may change, but there is no way to adjust them dynamically. Such approaches are therefore not applicable to product quality control where instantaneous adjustment is required. Yet many manufacturing systems exist, such as beverage and food production, where ingredients must be adjusted instantaneously to maintain product quality. To address this research gap, we introduce a method that identifies critical data by ranking them against three criticality assessment criteria that capture instantaneous product quality change during manufacturing: (1) correlation, (2) percentage quality change, and (3) sensitivity. Product quality is estimated using polynomial regression (POLY), SVM, and DNN. The proposed method is validated using wine manufacturing data. It accurately identifies critical data, with SVM producing the lowest average product quality prediction error (10.40%) compared with POLY (11%) and DNN (14.40%). © 2013 IEEE
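The correlation criterion above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature names and toy values are hypothetical, and only the first of the three criteria (correlation with quality) is shown.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_by_correlation(features, quality):
    """Rank feature names by |correlation with product quality|, highest first."""
    scores = {name: abs(pearson(col, quality)) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical wine-style sensor readings and a quality score per batch
features = {
    "acidity":   [3.1, 3.3, 3.0, 3.6, 3.2],
    "sulphates": [0.5, 0.6, 0.5, 0.7, 0.6],
}
quality = [5, 6, 5, 7, 6]
print(rank_by_correlation(features, quality))  # most quality-critical first
```

A full version would combine this ranking with the percentage-quality-change and sensitivity scores before selecting the critical data to adjust.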

    Indoor emission sources detection by pollutants interaction analysis

    This study employs the correlation coefficient technique to support emission source detection in indoor environments. Unlike existing methods that analyze only primary pollution, we also consider secondary pollution (i.e., chemical reactions between pollutants, in addition to pollutant levels), and calculate intra-pollutant correlation coefficients to characterize and distinguish emission events. Extensive experiments show that seven major indoor emission sources are identified by the proposed method: (1) frying canola oil on an electric hob, (2) frying olive oil on an electric hob, (3) frying olive oil on a gas hob, (4) spraying household pesticide, (5) lighting a cigarette and allowing it to smoulder, (6) no activities, and (7) a venting session. Furthermore, our method improves the detection accuracy of a support vector machine compared with using no data filtering or with typical feature extraction methods such as PCA and LDA. © 2021 by the authors. Licensee MDPI, Basel, Switzerland
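The intra-pollutant correlation idea can be sketched as follows: within a time window, compute the pairwise correlations between pollutant readings and use them as a feature vector for the event classifier. A minimal sketch, with hypothetical pollutant names and readings, not the paper's actual pipeline:

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_features(window):
    """Pairwise intra-pollutant correlations within one time window,
    keyed by pollutant pair; used as input features for a classifier."""
    return {(a, b): pearson(window[a], window[b])
            for a, b in combinations(sorted(window), 2)}

# hypothetical window: NO2 tracks CO (co-emitted), while O3 is consumed
# by reaction with NO2 (secondary chemistry), so it anti-correlates
window = {
    "CO":  [0.4, 0.6, 0.9, 1.1],
    "NO2": [10, 14, 20, 25],
    "O3":  [30, 26, 19, 15],
}
feats = correlation_features(window)
```

Such feature vectors, one per window, would then be fed to the support vector machine in place of raw pollutant levels.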

    Evaluation of Device-Independent Internet Spatial Location

    Device-independent Internet spatial location is needed for many purposes, such as data personalisation and social behaviour analysis. Internet spatial databases provide such locations based on the IP address of a device. The free-to-use databases are natively included in many UNIX and Linux operating systems, which are predominantly used for e-shops, social networks, and cloud data storage. Using a constructed ground truth dataset, we comprehensively evaluate these databases for null responses, returned country/region/city, and distance error. The created ground truth dataset differs from others by covering cities with both low and high populations and by retaining only devices that follow the rule of one IP address per ISP (Internet Service Provider) and per city. We define two new performance metrics that capture the effect of city population and the trustworthiness of the results. We also evaluate the databases against an alternative measurement-based approach, and we study the reasons behind the results. The data evaluated comes from Europe. The results may be of use to engineers, developers, and researchers who use knowledge of geographical location for related data processing and analysis, such as marketing.
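The distance-error metric used in such evaluations is typically the great-circle distance between the coordinates a database returns and the ground-truth device location. A minimal sketch using the haversine formula; the city coordinates are illustrative, not taken from the paper's dataset:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# hypothetical case: the database places a Helsinki device in Tallinn
truth = (60.1699, 24.9384)   # Helsinki, ground truth
reply = (59.4370, 24.7536)   # Tallinn, database response
error_km = haversine_km(*truth, *reply)
```

Averaging `error_km` over all ground-truth devices gives the per-database distance error that the evaluation compares.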

    STRATUS: Towards returning data control to cloud users

    When we upload or create data in the cloud or on the web, we immediately lose control of those data. Most of the time, we do not know where the data will be stored or how many copies of our files exist. Worse, we are unable to detect or stop malicious insiders from accessing possibly sensitive data. Despite being transferred across and within clouds over encrypted channels, data often have to be decrypted within the database to be processed. Exposing the data at some point in the cloud to a few privileged users is undoubtedly a vendor-centric approach, and it hinges on the trust relationships data owners have with their cloud service providers. A recent example of the abuse of such a trust relationship is the high-profile Edward Snowden case. In this paper, we propose a user-centric approach that returns data control to the data owners – empowering users with data provenance, transparency and auditability, homomorphic encryption, situation awareness, revocation, attribution, and data resilience. We also cover key elements of the concept of user data control. Finally, we introduce how we attempt to address these issues via the New Zealand Ministry of Business Innovation and Employment (MBIE)-funded STRATUS (Security Technologies Returning Accountability, Trust and User-centric Services in the Cloud) research project

    Transductive Support Vector Machines and Applications in Bioinformatics for Promoter Recognition

    This paper introduces a novel Transductive Support Vector Machine (TSVM) model and compares it with the traditional inductive SVM on a key problem in Bioinformatics: promoter recognition. Inductive reasoning develops one model (a function) to approximate data from the whole problem space (induction) and then uses this model to predict output values for new input vectors (deduction). In a transductive inference system, by contrast, a model is developed for every new input vector based on the data closest to that vector in an existing database, and this model is used to predict the output for that vector only. The TSVM far outperforms the inductive SVM models applied to the same problems. An analysis of the advantages and disadvantages of the TSVM is given. Hybrid TSVM–evolving connectionist systems are discussed as directions for future research
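The transductive scheme described above — fit a model per query from its nearest neighbours — can be sketched in pure Python. This is an illustrative simplification: the paper fits a local SVM around each new vector, whereas here a majority vote over the k nearest training samples stands in for that local model, and the data and labels are invented.

```python
import math
from collections import Counter

def transductive_predict(x_new, X_train, y_train, k=3):
    """Transductive-style prediction: build a model only from the k training
    samples closest to x_new and predict from that local neighbourhood.
    (Majority vote stands in for the per-query local SVM.)"""
    nearest = sorted(zip(X_train, y_train),
                     key=lambda p: math.dist(p[0], x_new))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# toy two-class problem (labels standing in for promoter / non-promoter)
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["non-promoter"] * 3 + ["promoter"] * 3
print(transductive_predict((0.95, 1.0), X, y))  # neighbours are all "promoter"
```

The key design point is that no global model is ever built: each prediction discards the model afterwards, which is what distinguishes transduction from the induce-then-deduce cycle of a standard SVM.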